Knowledge editing

See also: Training data leakage and memorization in language models

Can we remove or alter “knowledge” stored in LLMs? And can an edit propagate to associated facts, so that statements which logically depend on the edited fact stay consistent (e.g., after editing a model to hold that the Eiffel Tower is in Rome, it should also say that seeing the tower requires a trip to Rome, not Paris)?
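
One family of methods treats a transformer MLP layer as a linear associative memory and rewrites a single fact with a closed-form rank-one weight update (the idea behind ROME-style editing). The sketch below is a minimal illustration on a toy matrix, not the actual method: `k_star` (a key derived from the subject) and `v_star` (a value encoding the new fact) are hypothetical random vectors standing in for real model activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Toy "linear associative memory": W maps key vectors (subject
# representations) to value vectors (fact representations).
W = rng.normal(size=(d, d))

# Hypothetical key for the fact we want to edit, and the new value
# the layer should return for that key after the edit.
k_star = rng.normal(size=d)
v_star = rng.normal(size=d)

# Rank-one update that enforces W' @ k_star == v_star exactly while
# changing W as little as possible (minimum Frobenius-norm solution):
#   W' = W + (v_star - W k_star) k_star^T / (k_star^T k_star)
residual = v_star - W @ k_star
W_edited = W + np.outer(residual, k_star) / (k_star @ k_star)

print(np.allclose(W_edited @ k_star, v_star))   # True: edited fact holds

# Keys orthogonal to k_star are untouched by the update.
k_orth = rng.normal(size=d)
k_orth -= (k_orth @ k_star) / (k_star @ k_star) * k_star
print(np.allclose(W_edited @ k_orth, W @ k_orth))  # True
```

The minimum-norm property is what makes such edits attractive: the target association changes exactly, while mappings for orthogonal keys are preserved. It also highlights the second question above, since a purely local update like this does nothing by itself to propagate the change to associated facts.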